Incremental Learning with a Stopping Criterion - Experimental Results

Authors

  • Rachida Chentouf
  • Christian Jutten
Abstract

We recently proposed a new incremental procedure for supervised learning with noisy data. Each step consists of adding to the current network a new unit (or a small 2- or 3-neuron network) which is trained to learn the error of the current network. The incremental step is repeated until the error of the current network can be considered as noise. The stopping criterion is very simple and can be deduced directly from a statistical test on the estimated parameters of the new unit. In this paper, we present an experimental comparison between a few variants of the incremental algorithm and the classic backpropagation algorithm, with respect to convergence, speed of convergence, and the optimal number of neurons. Experimental results point out the efficacy of this new incremental scheme, especially in avoiding spurious minima and in designing a network of well-suited size. The number of basic operations is also decreased, giving an average gain in convergence speed of about 20%.
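The loop described in the abstract can be sketched as follows. This is a simplified illustration, not the authors' exact algorithm: each added unit is a single tanh neuron fitted to the current residual by gradient descent, and a crude variance-ratio check stands in for the paper's statistical test on the new unit's estimated parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: a smooth target plus additive noise.
X = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(200)

def train_unit(x, residual, epochs=2000, lr=0.1):
    """Fit one unit  a + b * tanh(w*x + c)  to the residual by gradient descent."""
    a, b, w, c = 0.0, 0.5, 1.0, 0.0
    for _ in range(epochs):
        h = np.tanh(w * x + c)
        err = (a + b * h) - residual
        # Gradient steps on the mean squared error for each parameter.
        a -= lr * 2 * err.mean()
        b -= lr * 2 * (err * h).mean()
        w -= lr * 2 * (err * b * (1 - h ** 2) * x).mean()
        c -= lr * 2 * (err * b * (1 - h ** 2)).mean()
    return a, b, w, c

units = []
residual = y.copy()
for step in range(20):
    a, b, w, c = train_unit(X, residual)
    new_residual = residual - (a + b * np.tanh(w * X + c))
    # Simplified stopping test: if the new unit barely reduces the residual
    # variance, treat the residual as noise and stop growing the network.
    if residual.var() / new_residual.var() < 1.05:
        break
    units.append((a, b, w, c))
    residual = new_residual

print(f"units added: {len(units)}, residual variance: {residual.var():.4f}")
```

Each kept unit is trained only on what the current network still gets wrong, so the network grows one unit at a time until the remaining error behaves like noise.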


Similar articles

Learning a Stopping Criterion for Active Learning for Word Sense Disambiguation and Text Classification

In this paper, we address the problem of knowing when to stop the process of active learning. We propose a new statistical learning approach, called the minimum expected error strategy, which defines a stopping criterion by estimating the classifier's expected error on future unlabeled examples during the active learning process. In experiments on active learning for word sense disambiguation and...
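A minimal pool-based sketch of this kind of criterion (the learner, the uncertainty-sampling query rule, and the error estimate here are generic illustrative choices, not the cited paper's exact method): active learning stops once the mean of min(p, 1-p) over the unlabeled pool, a rough stand-in for the expected error on future examples, falls below a threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-class pool with a linear boundary.
n = 400
Xpool = rng.standard_normal((n, 2))
ypool = (Xpool[:, 0] + Xpool[:, 1] > 0).astype(float)

def fit_logreg(X, y, epochs=300, lr=0.5):
    """Plain batch-gradient-descent logistic regression with a bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

labeled = list(rng.choice(n, 10, replace=False))
unlabeled = [i for i in range(n) if i not in labeled]

for _ in range(100):
    w = fit_logreg(Xpool[labeled], ypool[labeled])
    p = predict_proba(w, Xpool[unlabeled])
    # Rough expected-error estimate over the remaining unlabeled pool.
    expected_error = np.minimum(p, 1 - p).mean()
    if expected_error < 0.05:        # stopping criterion
        break
    # Uncertainty sampling: query the example closest to the boundary.
    query = unlabeled[int(np.argmin(np.abs(p - 0.5)))]
    labeled.append(query)
    unlabeled.remove(query)

print(f"queries made: {len(labeled) - 10}, expected error: {expected_error:.3f}")
```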


Comparing different stopping criteria for fuzzy decision tree induction through IDFID3

Fuzzy Decision Tree (FDT) classifiers combine decision trees with the approximate reasoning offered by fuzzy representation to deal with language and measurement uncertainties. When an FDT induction algorithm utilizes stopping criteria for early stopping of the tree's growth, the threshold values of the stopping criteria control the number of nodes. Finding a proper threshold value for a stopping crite...


An Introduction to a New Criterion Proposed for Stopping GA Optimization Process of a Laminated Composite Plate

Several traditional stopping criteria in Genetic Algorithms (GAs) are applied to the optimization process of a typical laminated composite plate. The results show that neither the criteria based on statistical parameters nor those based on theoretical models perform satisfactorily in determining the interruption point for the GA process. Here, considering the configuration of ...


Multi-Criteria-Based Strategy to Stop Active Learning for Data Annotation

In this paper, we address the issue of deciding when to stop active learning for building a labeled training corpus. Firstly, this paper presents a new stopping criterion, classification-change, which considers the potential of each unlabeled example to change the decision boundaries. Secondly, a multi-criteria-based combination strategy is proposed to solve the problem of predefining an a...


Some Greedy Learning Algorithms for Sparse Regression and Classification with Mercer Kernels

We present some greedy learning algorithms for building sparse nonlinear regression and classification models from observational data using Mercer kernels. Our objective is to develop efficient numerical schemes for reducing the training and runtime complexities of kernel-based algorithms applied to large datasets. In the spirit of Natarajan’s greedy algorithm (Natarajan, 1995), we iteratively ...
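A matching-pursuit-style sketch of such a greedy scheme (a generic illustration, not Natarajan's exact algorithm): at each step the Gaussian-kernel column most correlated with the current residual is added to the model, the coefficients are refit by least squares, and selection stops once the residual reaches roughly the noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
X = np.linspace(-1.0, 1.0, 100)
y = np.sin(4 * X) + 0.05 * rng.standard_normal(100)

# Gaussian (Mercer) kernel matrix: column j is the basis function k(., x_j).
K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * 0.2 ** 2))

selected = []
residual = y.copy()
for _ in range(15):
    # Greedy step: pick the kernel column most correlated with the residual.
    scores = np.abs(K.T @ residual) / np.linalg.norm(K, axis=0)
    j = int(np.argmax(scores))
    if j not in selected:
        selected.append(j)
    # Refit all coefficients on the selected columns by least squares.
    coef, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)
    residual = y - K[:, selected] @ coef
    if np.linalg.norm(residual) ** 2 / len(y) < 0.01:   # near the noise level
        break

print(f"sparse model uses {len(selected)} of {len(X)} kernel centers")
```

Because the least-squares refit makes the residual orthogonal to every selected column, each greedy step picks a fresh kernel center, which is what yields a sparse model.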



Publication date: 1995